Should Robots Kill? Moral Judgments for Actions of Artificial Cognitive Agents
Authors
Abstract
Moral dilemmas are used to study situations in which two moral rules conflict: e.g., is it permissible to kill one person in order to save several others? In standard moral dilemmas, the protagonist is a human. However, recent progress in robotics raises the question of how artificial cognitive agents should act in situations involving moral dilemmas. Here, we study moral judgments when the protagonist in the dilemma is an artificial cognitive agent (a humanoid robot or an automated system) and compare them to moral judgments for the same action taken by a human agent. Participants are asked to choose the appropriate action for the protagonist, to evaluate the rightness and the moral permissibility of the utilitarian action, and to rate the blameworthiness of the agent. We also investigate the role of the instrumentality of the inflicted harm. The main results are that participants rate the utilitarian actions of a humanoid robot or of an automated system as more morally permissible than the same actions of a human. The act of killing undertaken by a humanoid robot is rated as less blameworthy than the same act performed by a human or by an automated system. The results are interpreted and discussed in terms of responsibility and intentionality as characteristics of moral agency.
Related Articles
The Functional Morality of Robots
It is often argued that a robot cannot be held morally responsible for its actions. The author suggests that one should use the same criteria for robots as for humans, regarding the ascription of moral responsibility. When deciding whether humans are moral agents one should look at their behaviour and listen to the reasons they give for their judgments in order to determine that they understood...
Constrained Incrementalist Moral Decision Making for a Biologically Inspired Cognitive Architecture
The field of machine ethics has emerged in response to the development of autonomous artificial agents with the ability to interact with human beings, or to produce changes in the environment which can affect humans (Allen, Varner, & Zinser, 2000). Such agents, whether physical (robots) or virtual (software agents), need a mechanism for moral decision making in order to ensure that their actions...
The Social and Moral Cognition of Group Agents
To better understand the possibility, scope, and limits of punishment for groups we must understand how humans conceptualize group agents, interpret their actions, and make moral judgments about them. In this article I therefore examine the social-cognitive foundations for human perceptions of groups and the moral evaluations of their conduct. Part I identifies the conceptual framework within w...
Toward Morality and Ethics for Robots
Humans need morality and ethics to get along constructively as members of the same society. As we face the prospect of robots taking a larger role in society, we need to consider how they, too, should behave toward other members of society. To the extent that robots will be able to act as agents in their own right, as opposed to being simply tools controlled by humans, they will need to behave ...
Outline of a sensory-motor perspective on intrinsically moral agents
We propose that moral behavior of artificial agents could (and should) be intrinsically grounded in their own sensory-motor experiences. Such an ability depends critically on seven types of competences. First, intrinsic morality should be grounded in the internal values of the robot arising from its physiology and embodiment. Second, the moral principles of robots should develop through their in...